Lower Bounds for Higher-Order Convex Optimization

Authors

  • Naman Agarwal
  • Elad Hazan
Abstract

State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. As a special case, we show Nesterov’s accelerated cubic regularization method to be nearly tight.

[email protected], Computer Science, Princeton University
[email protected], Computer Science, Princeton University
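
For context, the tightness claim concerns methods built on the cubic-regularized second-order model of Nesterov and Polyak. As a background sketch in standard notation (with L_2 a Lipschitz constant of the Hessian; this is not the paper's own construction), each iteration solves

\[
x_{t+1} = \arg\min_{y} \Big\{ f(x_t) + \nabla f(x_t)^\top (y - x_t) + \tfrac{1}{2}(y - x_t)^\top \nabla^2 f(x_t)(y - x_t) + \tfrac{L_2}{6}\,\|y - x_t\|^3 \Big\},
\]

and Nesterov's accelerated variant is known to achieve f(x_t) - f(x^*) = O(L_2 D^3 / t^3) for convex f, i.e. on the order of (1/\epsilon)^{1/3} oracle calls. The lower bound proved here shows that a polynomial dependence of this kind on 1/\epsilon and the smoothness parameters cannot be avoided.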

Similar Resources

Excessive Gap Technique in Nonsmooth Convex Minimization

In this paper we introduce a new primal-dual technique for convergence analysis of gradient schemes for nonsmooth convex optimization. As an example of its application, we derive a primal-dual gradient method for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count. Another example is a gradient sche...
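
As a hedged reading of the quoted rate (standard smoothing notation, not taken from this particular paper): for objectives with the saddle-point structure f(x) = \max_{u \in Q} \{ \langle Ax, u \rangle - \hat{\phi}(u) \}, the excessive gap technique maintains a primal-dual pair (\bar{x}_k, \bar{u}_k) whose duality gap satisfies

\[
f(\bar{x}_k) - \phi(\bar{u}_k) \le \frac{C}{k+1},
\]

with C depending on \|A\| and the diameters of the primal and dual feasible sets; this is the O(1/k) guarantee referred to above.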

Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms

We consider a class of distributed non-convex optimization problems that often arise in modern distributed signal and information processing, in which a number of agents connected by a network G collectively optimize a sum of smooth (possibly non-convex) local objective functions. We address the following fundamental question: For a class of unconstrained non-convex problems with Lipschitz continuo...
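
Written out (in standard notation, as an assumed formulation rather than a quote from the paper), the problem class is

\[
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{N} \sum_{i=1}^{N} f_i(x),
\]

where agent i holds the smooth (possibly non-convex) local objective f_i and may exchange information only with its neighbors in the network G.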

Convex relaxations of chance constrained optimization problems

In this paper we develop convex relaxations of chance constrained optimization problems in order to obtain lower bounds on the optimal value. Unlike existing statistical lower bounding techniques, our approach is designed to provide deterministic lower bounds. We show that a version of the proposed scheme leads to a tractable convex relaxation when the chance constraint function is affine with ...
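
A generic chance constrained program of the kind being relaxed has the form (standard formulation; the structure actually treated in the paper may be more specific):

\[
\min_{x} \; c^\top x \quad \text{s.t.} \quad \mathbb{P}\big(g(x, \xi) \le 0\big) \ge 1 - \alpha,
\]

where \xi is a random vector and \alpha \in (0, 1). The probability constraint is non-convex in general, which is why convex relaxations are used to obtain deterministic lower bounds on the optimal value.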

Oracle Complexity of Second-Order Methods for Smooth Convex Optimization

Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we study the oracle complexity of such methods, or equivalently, the number of iterations required to optimize a function to a given accuracy. Focusing on smooth and convex functions, we derive (to the best of our knowledge) the firs...
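
Concretely, a second-order oracle (standard definition, assumed here) answers a query at a point x with

\[
\mathcal{O}(x) = \big( f(x), \; \nabla f(x), \; \nabla^2 f(x) \big),
\]

and the oracle complexity of a method is the number of such queries it needs to reach an \epsilon-optimal point.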

On convex relaxations of quadrilinear terms

The best known method to find exact or at least ε-approximate solutions to polynomial programming problems is the spatial Branch-and-Bound algorithm, which rests on computing lower bounds to the value of the objective function to be minimized on each region that it explores. These lower bounds are often computed by solving convex relaxations of the original program. Although convex envelopes ar...
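
For reference, the bilinear building block from which such multilinear relaxations are typically assembled is the McCormick envelope: for w = xy with x \in [x^L, x^U] and y \in [y^L, y^U],

\[
w \ge x^L y + y^L x - x^L y^L, \qquad w \ge x^U y + y^U x - x^U y^U,
\]
\[
w \le x^U y + y^L x - x^U y^L, \qquad w \le x^L y + y^U x - x^L y^U.
\]

One standard route to relaxing a quadrilinear term x_1 x_2 x_3 x_4 is to compose such bilinear envelopes pairwise; how best to group the factors is the sort of question studied here.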

Journal:
  • CoRR

Volume: abs/1710.10329   Issue: -

Pages: -

Publication date: 2017